Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transferring high-quality textures from low-quality sequences that are heavily degraded by blur, additive noise, and compression artifacts. In this work, a novel Frequency-Transformer (FTVSR) is proposed for handling low-quality videos that carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle the complicated degradation processes found in real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
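The frequency-band attention described above can be illustrated with a minimal, hypothetical PyTorch sketch (not the authors' released implementation): each patch is transformed with a 2D DCT, the resulting frequency bands are treated as tokens, self-attention is applied across bands, and an inverse DCT maps the result back to the spatial domain. The module name, layer sizes, and token layout are assumptions made for illustration.

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis of size n x n."""
    k = torch.arange(n, dtype=torch.float32)
    basis = torch.cos(math.pi / n * (k[None, :] + 0.5) * k[:, None])
    basis[0] /= math.sqrt(2.0)
    return basis * math.sqrt(2.0 / n)

class FrequencyBandAttention(nn.Module):
    """Sketch: per-patch 2D DCT followed by self-attention over frequency bands."""
    def __init__(self, channels: int = 64, patch: int = 8, dim: int = 64, heads: int = 4):
        super().__init__()
        self.patch = patch
        self.register_buffer("D", dct_matrix(patch))
        self.to_tokens = nn.Linear(channels, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_out = nn.Linear(dim, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) frame features, H and W divisible by the patch size.
        B, C, H, W = x.shape
        p = self.patch
        patches = x.unfold(2, p, p).unfold(3, p, p)          # (B, C, nH, nW, p, p)
        spec = self.D @ patches @ self.D.T                   # 2D DCT of every patch
        nH, nW = spec.shape[2], spec.shape[3]
        # Each of the p*p frequency bands becomes one token with a C-dim feature.
        tokens = spec.permute(0, 2, 3, 4, 5, 1).reshape(B * nH * nW, p * p, C)
        tokens = self.to_tokens(tokens)
        attended, _ = self.attn(tokens, tokens, tokens)      # attention across frequency bands
        out = self.to_out(attended).reshape(B, nH, nW, p, p, C)
        out = out.permute(0, 5, 1, 2, 3, 4)                  # (B, C, nH, nW, p, p)
        out = self.D.T @ out @ self.D                        # inverse DCT back to pixels
        return out.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H, W)

y = FrequencyBandAttention()(torch.randn(2, 64, 32, 32))     # same shape as the input
```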
Vision Transformers have shown great promise recently for many vision tasks due to their insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts in global scope, regardless of the query patch position (and are also head-irrelevant). Second, the attention maps are intrinsically sparse: a few tokens dominate the attention weights, and introducing knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), consists purely of convolutional layers and firmly inherits the merits of both the attention mechanism and convolutions, including the dynamic property, weight sharing, and short- and long-range feature modeling. Experimental results demonstrate the effectiveness of FCViT. With less than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than the previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks, such as object detection, instance segmentation, and semantic segmentation. Codes and models are made available at: https://github.com/ma-xu/FCViT.
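As a rough illustration of the idea of abstracting one query-irrelevant global context and injecting it into a convolutional branch, the sketch below pools features with a single softmax-weighted map shared by all positions and adds the projected context to a depthwise convolution. This is an illustrative block, not the released FCViT architecture; all layer names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class GlobalContextConvBlock(nn.Module):
    """Sketch: one shared (query-irrelevant) attention map pools a global context,
    which is then fused with a short-range depthwise convolution."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)                          # shared attention map
        self.context_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim) # short-range, depthwise
        self.fuse = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        w = self.score(x).flatten(2).softmax(dim=-1)              # (B, 1, HW), no query dependence
        ctx = (x.flatten(2) * w).sum(dim=-1)[..., None, None]     # (B, C, 1, 1) global context
        return self.fuse(self.local(x) + self.context_proj(ctx))  # long- plus short-range features

y = GlobalContextConvBlock()(torch.randn(1, 64, 14, 14))          # same shape as the input
```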
We propose the first joint audio-video generation framework that brings engaging watching and listening experiences simultaneously, towards high-quality realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion consists of a sequential multi-modal U-Net designed for a joint denoising process. Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noise. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block bridging the two subnets, which enables efficient cross-modal alignment and thus mutually reinforces audio and video fidelity. Extensive experiments show superior results in unconditional audio-video generation and zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on the Landscape and AIST++ dancing datasets. Turing tests with 10k votes further demonstrate dominant preferences for our model. The code and pre-trained models can be downloaded at https://github.com/researchmm/MM-Diffusion.
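A minimal sketch of what a random-shift cross-modal attention block could look like is given below, assuming video tokens attend to a randomly shifted window of audio tokens so that the cost stays low while alignment is still learned over training. The class name, window size, and residual formulation are assumptions, not the paper's exact block.

```python
import torch
import torch.nn as nn

class RandomShiftCrossAttention(nn.Module):
    """Sketch: video tokens attend to a randomly shifted window of audio tokens."""
    def __init__(self, dim: int = 128, heads: int = 4, window: int = 16):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, video_tokens: torch.Tensor, audio_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, Nv, D); audio_tokens: (B, Na, D)
        na = audio_tokens.size(1)
        shift = int(torch.randint(0, max(na - self.window, 1), (1,)))  # random window start
        window = audio_tokens[:, shift:shift + self.window]            # shifted audio slice
        fused, _ = self.attn(video_tokens, window, window)             # video queries, audio keys/values
        return video_tokens + fused                                    # residual cross-modal fusion

v, a = torch.randn(2, 64, 128), torch.randn(2, 256, 128)
out = RandomShiftCrossAttention()(v, a)                                # (2, 64, 128)
```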
Recent efforts on Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representations to represent 3D scenes. Due to the volumetric rendering process, the inference speed of NeRF is extremely slow, limiting its application on resource-constrained hardware such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models. However, most of them still require a high-end GPU for acceleration or extra storage memory, neither of which is available on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed per ray to predict the pixel color. Nevertheless, to reach a rendering quality similar to NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real-time on mobile devices for neural rendering. We follow the setting of NeLF to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, i.e., saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ms (iPhone 13) for rendering one $1008\times756$ image of real 3D scenes. Additionally, we achieve similar image quality as NeRF and better quality than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
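To make the NeLF setting concrete, the sketch below shows the core idea in a few lines: a single MLP forward pass maps a ray parameterization directly to an RGB color, with no per-ray volumetric integration as in NeRF. The ray encoding (origin plus direction) and network width/depth here are illustrative assumptions, not the proposed mobile architecture.

```python
import torch
import torch.nn as nn

class TinyNeLF(nn.Module):
    """Sketch of a neural light field: one forward pass per ray -> pixel colour."""
    def __init__(self, in_dim: int = 6, hidden: int = 64, depth: int = 4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers.append(nn.Linear(hidden, 3))
        self.mlp = nn.Sequential(*layers)

    def forward(self, rays: torch.Tensor) -> torch.Tensor:
        # rays: (N, 6) = concatenated ray origin and direction.
        return self.mlp(rays).sigmoid()                 # RGB in [0, 1], no volume integration

colors = TinyNeLF()(torch.randn(1024, 6))               # 1024 rays -> 1024 RGB values
```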
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Due to technical difficulty, one can only obtain rough 3D models (R3DMs) for most real objects using existing 3D reconstruction techniques. As a result, physically-based rendering (PBR) would render low-quality images or videos for scenes constructed from R3DMs. One promising solution is to represent real-world objects as Neural Fields such as NeRFs, which are able to generate photo-realistic renderings of an object under desired viewpoints. However, a drawback is that the views synthesized through Neural Fields Rendering (NFR) cannot reflect the simulated lighting details on R3DMs in PBR pipelines, especially when object interactions in the 3D scene creation cause local shadows. To solve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR, such that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven-surface issue caused by R3DMs, and is empowered by several perceptually-motivated constraints and a new Lab angle loss which enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior in synthesizing impressive lighting, and is promising in pushing NFR further into practical 3D modeling workflows. Project page: https://3d-front-future.github.io/LighTNet .
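The abstract does not spell out the simplified image composition model, so the snippet below is purely an illustration of the kind of lighting transfer being described: an NFR rendering is re-shaded by a per-pixel shading ratio extracted from the PBR pipeline. The function name, inputs, and the multiplicative model are all assumptions, not LighTNet itself.

```python
import torch

def transfer_pbr_lighting(nfr_rgb: torch.Tensor, pbr_lit: torch.Tensor,
                          pbr_unlit: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Illustrative only: re-shade an NFR rendering with a shading ratio taken from PBR,
    i.e. the PBR render under scene lighting divided by an unlit PBR render."""
    shading = pbr_lit / (pbr_unlit + eps)        # approximate per-pixel lighting factor
    return (nfr_rgb * shading).clamp(0, 1)       # NFR colours with transferred PBR lighting

nfr = torch.rand(1, 3, 128, 128)
lit, unlit = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128) + 0.5
relit = transfer_pbr_lighting(nfr, lit, unlit)
```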
AI Illustrator aims to automatically design visually appealing images that evoke rich thoughts and emotions. To achieve this goal, we propose a framework that translates raw descriptions with complex semantics into semantically corresponding images. The main challenge lies in the complexity of the semantics of raw descriptions, which may be hard to visualize (e.g., ...); this usually poses a challenge for existing methods to handle such descriptions. To address this issue, we propose a Prompt-based Cross-Modal Generation framework (PCM-Frame) that leverages two powerful pre-trained models, CLIP and StyleGAN. Our framework consists of two components: a projection module from text embeddings to image embeddings based on prompts, and an adapted image generation module that takes image embeddings as input and is trained with a combined semantic consistency loss. To bridge the gap between realistic images and illustration designs, we further adopt a stylization model as post-processing for better visual effects. Benefiting from the pre-trained models, our method can handle complex descriptions and does not require external paired data for training. Furthermore, we have built a benchmark consisting of 200 raw descriptions. We conduct a user study to demonstrate our superiority over competing methods on complex texts. We release our code at https://github.com/researchmm/AI_Illustrator.
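The projection component can be pictured with a small, hedged sketch: an MLP maps a CLIP text embedding towards the CLIP image-embedding space, and a cosine-based semantic consistency term pulls it towards a reference image embedding before the result is handed to a StyleGAN-like generator. Dimensions, names, and the exact loss are illustrative assumptions, not the released PCM-Frame code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptProjector(nn.Module):
    """Sketch: project a CLIP text embedding towards the CLIP image-embedding space."""
    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, dim))

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.mlp(text_emb), dim=-1)   # stay on CLIP's unit hypersphere

def semantic_consistency_loss(pred_emb: torch.Tensor, ref_img_emb: torch.Tensor) -> torch.Tensor:
    """Pull the projected embedding towards a reference CLIP image embedding."""
    return 1.0 - F.cosine_similarity(pred_emb, ref_img_emb, dim=-1).mean()

pred = PromptProjector()(torch.randn(4, 512))
loss = semantic_consistency_loss(pred, F.normalize(torch.randn(4, 512), dim=-1))
```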
Image enhancement aims to improve the aesthetic visual quality of photos by retouching color and tone, and is an essential technique in professional digital photography. In recent years, learning-based image enhancement algorithms have achieved promising performance and attracted increasing popularity. However, typical efforts attempt to build a single uniform enhancer that applies the same color transformation to all pixels. This ignores the differences between pixels of different content (e.g., sky, ocean, etc.) that matter for a photo, leading to unsatisfactory results. In this paper, we propose a novel learnable context-aware 4-dimensional lookup table (4D LUT) that achieves content-dependent enhancement within each image by adaptively learning the photo's context. In particular, we first introduce a lightweight context encoder and a parameter encoder to learn a pixel-level category context map and a group of image-adaptive coefficients, respectively. Then, the context-aware 4D LUT is generated by integrating multiple basis 4D LUTs via the coefficients. Finally, the enhanced image can be obtained by feeding the source image and the context map into the fused context-aware 4D LUT. Compared with the traditional 3D LUT (i.e., RGB mapped to RGB), which is commonly used in camera imaging pipelines or retouching tools, the 4D LUT (i.e., RGBC, RGB plus context, mapped to RGB) enables finer control of the color transformation for pixels with different content in each image, even when they share the same RGB values. Experimental results demonstrate that our method outperforms other state-of-the-art methods on widely-used benchmarks.
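A simplified sketch of the RGBC-to-RGB idea follows: several basis 4D LUTs are blended with image-adaptive coefficients, and each pixel's (R, G, B, context) value indexes the fused table. For brevity the lookup uses nearest-neighbour binning, whereas the paper's version relies on learned encoders and quadrilinear interpolation; the bin count and module name are assumptions.

```python
import torch
import torch.nn as nn

class ContextAware4DLUT(nn.Module):
    """Sketch: blend basis 4D LUTs with per-image coefficients, then look up RGBC -> RGB."""
    def __init__(self, n_basis: int = 3, bins: int = 17):
        super().__init__()
        self.bins = bins
        self.basis = nn.Parameter(torch.rand(n_basis, bins, bins, bins, bins, 3))

    def forward(self, img: torch.Tensor, context: torch.Tensor, coeffs: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W) in [0, 1]; context: (B, 1, H, W) in [0, 1]; coeffs: (B, n_basis)
        lut = torch.einsum('bn,nijklc->bijklc', coeffs, self.basis)    # fused per-image 4D LUT
        B, _, H, W = img.shape
        idx = torch.cat([img, context], dim=1).clamp(0, 1)             # RGBC value per pixel
        idx = (idx * (self.bins - 1)).round().long()                   # nearest bin per channel
        r, g, b, c = idx[:, 0], idx[:, 1], idx[:, 2], idx[:, 3]        # each (B, H, W)
        batch = torch.arange(B, device=img.device)[:, None, None].expand(B, H, W)
        return lut[batch, r, g, b, c].permute(0, 3, 1, 2)              # enhanced (B, 3, H, W)

img, ctx = torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64)
coeffs = torch.softmax(torch.rand(1, 3), dim=-1)
out = ContextAware4DLUT()(img, ctx, coeffs)
```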
Recent works on language-guided image manipulation have shown the great power of language in providing rich semantics, especially for face images. However, another natural source of information in language, motion, is less explored. In this paper, we leverage motion information and study a novel task, language-guided face animation, which aims to animate a static face image with the help of language. To better utilize both the semantics and the motion in language, we propose a simple yet effective framework. Specifically, we propose a recurrent motion generator to extract a series of semantic and motion cues from the language and feed them, together with visual information, to a pre-trained StyleGAN to generate high-quality frames. To optimize the proposed framework, three carefully designed loss functions are introduced: a regularization loss to preserve face identity, a path-length regularization loss to ensure motion smoothness, and a contrastive loss to enable video synthesis under various language guidance within a single model. Extensive qualitative and quantitative evaluations on diverse domains (e.g., ...) demonstrate the superiority of our model. The code will be available at https://github.com/tiankaihang/language-guided-animation.git.
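The recurrent motion generator can be pictured with the hedged sketch below: a GRU unrolls a text embedding into a sequence of latent offsets that a pretrained StyleGAN-style generator would decode into frames. The dimensions, the offset-on-latent formulation, and the frame count are assumptions for illustration, not the released model.

```python
import torch
import torch.nn as nn

class RecurrentMotionGenerator(nn.Module):
    """Sketch: a GRU turns a text embedding into a sequence of per-frame latent offsets."""
    def __init__(self, text_dim: int = 512, latent_dim: int = 512, hidden: int = 512):
        super().__init__()
        self.gru = nn.GRUCell(text_dim, hidden)
        self.to_offset = nn.Linear(hidden, latent_dim)

    def forward(self, text_emb: torch.Tensor, w_init: torch.Tensor, n_frames: int = 16) -> torch.Tensor:
        # text_emb: (B, text_dim); w_init: (B, latent_dim) latent code of the input face.
        h = text_emb.new_zeros(text_emb.size(0), self.gru.hidden_size)
        latents = []
        for _ in range(n_frames):
            h = self.gru(text_emb, h)
            latents.append(w_init + self.to_offset(h))       # per-frame motion as a latent offset
        return torch.stack(latents, dim=1)                   # (B, T, latent_dim), fed to the generator

w_seq = RecurrentMotionGenerator()(torch.randn(2, 512), torch.randn(2, 512))   # (2, 16, 512)
```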
Compressed Video Super-Resolution (VSR) aims to restore high-resolution frames from compressed low-resolution counterparts. Recent VSR approaches often enhance an input frame by borrowing relevant textures from neighboring video frames. Although some progress has been made, grand challenges remain in effectively extracting and transferring high-quality textures from compressed videos, which are usually highly degraded. In this paper, we propose a novel Frequency-Transformer for compressed Video Super-Resolution (FTVSR) that conducts self-attention over a joint space-time-frequency domain. First, we divide video frames into patches and transform each patch into DCT spectral maps in which each channel represents a frequency band. Such a design enables fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts and further utilized for video frame restoration. Second, we study different self-attention schemes and discover that a ``divided attention'', which applies a joint space-frequency attention before a temporal-frequency attention on each frequency band, leads to the best video enhancement quality. Experimental results on two widely-used video super-resolution benchmarks show that FTVSR outperforms state-of-the-art methods on both uncompressed and compressed videos with clear visual margins. Code is available at https://github.com/researchmm/FTVSR.
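The "divided attention" ordering on frequency tokens can be sketched as follows: space-frequency attention is applied within each frame, followed by temporal-frequency attention across frames. The token layout (batch, frames, spatial patches, frequency bands, features) and the layer sizes are illustrative assumptions, not the released FTVSR code.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeFrequencyAttention(nn.Module):
    """Sketch: joint space-frequency attention per frame, then temporal-frequency attention."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.space_freq = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.time_freq = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, N, F, D) = batch, frames, spatial patches, frequency bands, feature dim.
        B, T, N, F, D = tokens.shape
        x = tokens.reshape(B * T, N * F, D)                         # space-frequency attention per frame
        x = x + self.space_freq(x, x, x)[0]
        x = x.reshape(B, T, N, F, D).permute(0, 2, 1, 3, 4).reshape(B * N, T * F, D)
        x = x + self.time_freq(x, x, x)[0]                          # then temporal-frequency attention
        return x.reshape(B, N, T, F, D).permute(0, 2, 1, 3, 4)      # back to (B, T, N, F, D)

out = DividedSpaceTimeFrequencyAttention()(torch.randn(1, 4, 16, 8, 64))
```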
Several recent works have empirically found that the finetuning learning rate is critical to the final performance of neural network structured pruning. Further research finds that the network trainability broken by pruning accounts for this, calling for an urgent need to recover trainability before finetuning. Existing attempts propose to exploit weight orthogonalization to achieve dynamical isometry for improved trainability. However, they only work for linear MLP networks. How to develop a filter pruning method that maintains or recovers trainability and is scalable to modern deep networks remains elusive. In this paper, we present Trainability Preserving Pruning (TPP), a regularization-based structured pruning method that effectively maintains trainability during sparsification. Specifically, TPP regularizes the Gram matrix of the convolutional kernels so as to de-correlate the pruned filters from the kept filters. Besides the convolutional layers, we also propose to regularize the BN parameters for better preserving trainability. Empirically, TPP can compete with the ground-truth dynamical isometry recovery method on linear MLP networks. On non-linear networks (ResNet56/VGG19, CIFAR datasets), it outperforms the other counterpart solutions. Moreover, TPP also works effectively with modern deep networks (ResNets) on ImageNet, delivering encouraging performance compared with many top-performing filter pruning methods. To our knowledge, this is the first approach that effectively maintains trainability during pruning for large-scale deep neural networks.
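A hedged sketch of the Gram-matrix regularizer described above is given below: it penalizes the correlations between filters scheduled for pruning and the kept filters, pushing the pruned filters to be de-correlated from the rest before removal. The exact regularization target and weighting in the paper may differ; this formulation and the function name are illustrative assumptions.

```python
import torch

def tpp_decorrelation_loss(conv_weight: torch.Tensor, pruned_mask: torch.Tensor) -> torch.Tensor:
    """Sketch: penalise Gram-matrix entries between pruned and kept convolutional filters."""
    # conv_weight: (out_channels, in_channels, k, k); pruned_mask: (out_channels,) bool
    w = conv_weight.flatten(1)                     # one row per filter
    gram = w @ w.t()                               # (out, out) Gram matrix of the kernels
    cross = gram[pruned_mask][:, ~pruned_mask]     # correlations between pruned and kept filters
    return cross.pow(2).mean()

weight = torch.randn(64, 32, 3, 3, requires_grad=True)
mask = torch.zeros(64, dtype=torch.bool)
mask[:16] = True                                   # mark the first 16 filters for pruning (illustrative)
loss = tpp_decorrelation_loss(weight, mask)        # add to the training loss as a regulariser
```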